Vision Transformer (ViT) extends the reach of transformers from language processing to computer vision, offering an alternative architecture to conventional convolutional neural networks (CNNs). Because transformer-based architectures are still new to computer vision modeling, design conventions for building them effectively remain understudied. Drawing on the successful design principles of CNNs, we investigate the role of spatial dimension conversion and its effectiveness in transformer-based architectures. We focus in particular on the dimension reduction principle of CNNs: as depth increases, a conventional CNN increases the channel dimension and decreases the spatial dimensions. We empirically show that such spatial dimension reduction also benefits transformer architectures, and propose a novel Pooling-based Vision Transformer (PiT) built upon the original ViT model. PiT achieves improved model capability and generalization performance over ViT. Through extensive experiments, we further show that PiT outperforms the baseline on several tasks, including image classification, object detection, and robustness evaluation. Source code and ImageNet models are available at https://github.com/naver-ai/pit.
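The pooling described above can be pictured as a stage-transition module that shrinks the token grid while widening the channel dimension. Below is a minimal, illustrative sketch in PyTorch; the class name `TokenPooling`, the strided convolution, and the separate class-token projection are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class TokenPooling(nn.Module):
    """Halve the spatial token grid and widen channels between stages."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Strided convolution reduces H and W of the reshaped token grid.
        self.conv = nn.Conv2d(in_dim, out_dim, kernel_size=3, stride=2, padding=1)
        self.cls_proj = nn.Linear(in_dim, out_dim)  # class token is pooled separately

    def forward(self, tokens, hw):
        h, w = hw
        cls, patches = tokens[:, :1], tokens[:, 1:]            # (B,1,C), (B,H*W,C)
        x = patches.transpose(1, 2).reshape(-1, patches.size(-1), h, w)
        x = self.conv(x)                                       # (B, C_out, H/2, W/2)
        patches = x.flatten(2).transpose(1, 2)                 # (B, H/2*W/2, C_out)
        return torch.cat([self.cls_proj(cls), patches], dim=1), (x.size(2), x.size(3))

# Example: 14x14 patch tokens with 192 channels -> 7x7 tokens with 384 channels.
pool = TokenPooling(192, 384)
out, hw = pool(torch.randn(2, 1 + 14 * 14, 192), (14, 14))     # out: (2, 50, 384)
```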
Weakly-supervised object localization (WSOL) has gained popularity over the last years for its promise to train localization models with only image-level labels. Since the seminal WSOL work of class activation mapping (CAM), the field has focused on how to expand the attention regions to cover objects more broadly and localize them better. However, these strategies rely on full localization supervision for validating hyperparameters and for model selection, which is in principle prohibited under the WSOL setting. In this paper, we argue that the WSOL task is ill-posed with only image-level labels, and propose a new evaluation protocol where full supervision is limited to a small held-out set that does not overlap with the test set. We observe that, under our protocol, five recent WSOL methods have not made a major improvement over the CAM baseline. Moreover, we report that existing WSOL methods have not surpassed a few-shot learning baseline in which the full supervision available at validation time is used for model training instead. Based on our findings, we discuss some future directions for WSOL.
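For reference, the CAM baseline mentioned above scores each spatial location with the classifier weights of the target class. A minimal sketch, assuming a network whose last convolutional features are globally average-pooled and fed to a single linear layer (names are illustrative):

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx):
    """CAM from the last conv features of a GAP + linear classifier.

    features:  (B, C, H, W) feature maps before global average pooling.
    fc_weight: (num_classes, C) weight matrix of the final linear layer.
    """
    cam = torch.einsum("bchw,c->bhw", features, fc_weight[class_idx])
    cam = F.relu(cam)
    # Per-image min-max normalization; thresholding this map yields a box or mask.
    cam = cam - cam.amin(dim=(1, 2), keepdim=True)
    return cam / cam.amax(dim=(1, 2), keepdim=True).clamp(min=1e-8)
```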
Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved effective for guiding the model to attend to less discriminative parts of objects (e.g., the leg rather than the head of a person), thereby letting the network generalize better and localize objects more accurately. On the other hand, current methods for regional dropout remove informative pixels from training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images, where the ground-truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, yields consistent performance gains on Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves model robustness against input corruptions and its out-of-distribution detection performance. Source code and pretrained models are available at https://github.com/clovaai/CutMix-PyTorch.
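A minimal sketch of the CutMix operation described above, close in spirit to the official PyTorch implementation linked in the abstract but simplified here; the exact patch-sampling details are illustrative:

```python
import numpy as np
import torch

def cutmix(images, labels, alpha=1.0):
    """Paste a random patch from a shuffled batch and mix labels by patch area."""
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(images.size(0))

    _, _, h, w = images.shape
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(h * cut_ratio), int(w * cut_ratio)
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)

    images[:, :, y1:y2, x1:x2] = images[index, :, y1:y2, x1:x2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)   # adjust to the exact pasted area
    return images, labels, labels[index], lam

# Training loss: lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)
```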
For low-level computer vision and image processing ML tasks, training on large datasets is critical for generalization. However, the standard practice of relying on real-world images primarily from the Internet comes with image quality, scalability, and privacy issues, especially in commercial contexts. To address this, we have developed a procedural synthetic data generation pipeline and dataset tailored to low-level vision tasks. Our Unreal engine-based synthetic data pipeline populates large scenes algorithmically with a combination of random 3D objects, materials, and geometric transformations. Then, we calibrate the camera noise profiles to synthesize the noisy images. From this pipeline, we generated a fully synthetic image denoising dataset (FSID) which consists of 175,000 noisy/clean image pairs. We then trained and validated a CNN-based denoising model, and demonstrated that the model trained on this synthetic data alone can achieve competitive denoising results when evaluated on real-world noisy images captured with smartphone cameras.
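The abstract does not specify the noise model behind the calibrated camera profiles, so the sketch below assumes a common heteroscedastic (shot + read) Gaussian model; `shot_gain` and `read_sigma` are hypothetical per-camera parameters that would be fitted from real noise measurements:

```python
import numpy as np

def add_camera_noise(clean, shot_gain=0.012, read_sigma=0.002, rng=None):
    """Synthesize a noisy image from a clean one with signal-dependent noise.

    clean: float image in [0, 1]. Variance grows linearly with intensity
    (shot noise) on top of a constant read-noise floor.
    """
    rng = np.random.default_rng() if rng is None else rng
    variance = shot_gain * clean + read_sigma ** 2
    noisy = clean + rng.normal(0.0, np.sqrt(variance))
    return np.clip(noisy, 0.0, 1.0)
```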
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually incompatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, achieving up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
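One standard route to the INT8 models required here is post-training full-integer quantization with a representative calibration set; the sketch below uses the stock TensorFlow Lite converter and is not tied to any particular challenge entry:

```python
import tensorflow as tf

def quantize_sr_model(keras_model, calibration_images):
    """Full-integer (INT8) post-training quantization for an SR model."""
    def representative_dataset():
        for img in calibration_images:            # e.g. DIV2K low-resolution crops
            yield [tf.cast(img[None, ...], tf.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    return converter.convert()                    # .tflite bytes for the NPU runtime
```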
In times of public crisis, information seeking is critical to people's self-care and well-being. Extensive research has investigated empirical understandings and technical solutions to facilitate information seeking among domestic citizens of affected regions. However, limited knowledge has been established to support international migrants who must navigate a crisis in their host countries. This paper presents an interview study with two groups of Chinese migrants living in Japan and the United States (n = 14). Participants reflected on their information-seeking experiences during the COVID-19 pandemic. These reflections were supplemented by two weeks of self-tracking, during which participants kept records of their COVID-related information-seeking practices. Our data indicate that participants often took language detours, i.e., visits to Mandarin resources, for information about the outbreak in their host countries. They also made strategic use of Mandarin information for selective reading, cross-checking, and contextualized interpretation of COVID-related information in Japanese or English. Although these practices enhanced the effectiveness of participants' COVID-related information gathering and sensemaking, they sometimes put participants at a disadvantage in ways they only recognized later. Moreover, participants lacked awareness of, or preference for, migrant-oriented information curated and released by public authorities in their host countries, even though such information was available. Building on these findings, we discuss solutions to improve international migrants' COVID-related information seeking in non-native language and cultural environments. We advocate for inclusive crisis infrastructures that engage people with varying levels of local language fluency, information literacy, and experience with public services.
Natural language processing (NLP) inference is seeing increasing adoption in mobile applications, where on-device inference is desirable for preserving user data privacy and avoiding network round trips. However, the unprecedented size of NLP models stresses both latency and memory, the two key resources of a mobile device. To meet a target latency, holding the whole model in memory launches execution as soon as possible but increases an app's memory footprint by several times, limiting its benefit to only a few inferences before the app is recycled by mobile memory management. On the other hand, loading the model from storage on demand incurs IO of several seconds, far exceeding the latency range that satisfies users; pipelining layer-wise model loading and execution also cannot hide IO, because of the large skew between IO and compute latencies. To this end, we propose Speedy Transformer Inference (STI). Built on the key idea of maximizing IO/compute resource utilization on the most important parts of the model, STI reconciles the latency/memory tension via two novel techniques. First, model sharding: STI manages model parameters as independently tunable shards and profiles their importance to accuracy. Second, elastic pipeline planning with a preload buffer: STI instantiates an IO/compute pipeline and uses a small buffer of preloaded shards to bootstrap execution without stalling in early stages; it judiciously selects, tunes, and assembles shards according to their importance for resource-elastic execution, maximizing inference accuracy. We implement STI on two commodity SoCs and evaluate it across a wide range of NLP tasks, under practical target latencies, on both CPU and GPU. We demonstrate that STI delivers high accuracy at a substantially lower memory cost, outperforming competitive baselines.
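The elastic pipeline can be pictured as a bounded preload buffer that lets shard IO run ahead of computation. The sketch below is a simplified illustration, not STI's implementation: shard importance profiling and per-shard tuning are omitted, and `load_shard`/`run_layer` are hypothetical placeholders:

```python
import queue
import threading

def pipelined_inference(shard_paths, load_shard, run_layer, x, preload=2):
    """Overlap shard IO with computation via a small preload buffer."""
    buf = queue.Queue(maxsize=preload)      # bounded buffer of preloaded shards

    def loader():
        for path in shard_paths:
            buf.put(load_shard(path))       # blocks when the buffer is full
        buf.put(None)                       # sentinel: no more shards

    threading.Thread(target=loader, daemon=True).start()
    while (shard := buf.get()) is not None:
        x = run_layer(shard, x)             # compute overlaps the next loads
    return x
```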
Multilevel optimization has been widely adopted as the mathematical foundation for a myriad of machine learning problems, such as hyperparameter optimization, meta-learning, and reinforcement learning, to name a few. Nonetheless, implementing multilevel optimization programs often requires expertise in both mathematics and programming, which hinders research in the field. We take an initial step towards closing this gap by introducing Betty, a high-level software library for gradient-based multilevel optimization. To this end, we develop an automatic differentiation procedure based on a novel interpretation of multilevel optimization as a dataflow graph. We further abstract the main components of multilevel optimization as Python classes to enable simple, modular, and maintainable programming. We empirically demonstrate that Betty can be used as a high-level programming interface for an array of multilevel optimization programs, while observing up to an 11% increase in test accuracy, a 14% decrease in GPU memory usage, and a 20% decrease in wall time compared to existing implementations on multiple benchmarks. The code is available at http://github.com/leopard-ai/betty.
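For intuition, here is a toy, library-free example of the kind of gradient-based bilevel program such a library targets: the outer variable is a regularization strength, updated by differentiating the validation loss through one unrolled inner step. This is not Betty's API (see the repository for that); it only illustrates the underlying computation.

```python
import torch

torch.manual_seed(0)
x_tr, y_tr = torch.randn(64, 10), torch.randn(64, 1)   # inner (training) data
x_va, y_va = torch.randn(64, 10), torch.randn(64, 1)   # outer (validation) data

w = torch.zeros(10, 1, requires_grad=True)              # inner variable
log_lam = torch.tensor(0.0, requires_grad=True)         # outer variable
outer_opt = torch.optim.Adam([log_lam], lr=1e-2)

for step in range(100):
    inner_loss = ((x_tr @ w - y_tr) ** 2).mean() + log_lam.exp() * (w ** 2).sum()
    # One unrolled inner gradient step, kept differentiable w.r.t. log_lam.
    grad_w = torch.autograd.grad(inner_loss, w, create_graph=True)[0]
    w_new = w - 0.1 * grad_w
    outer_loss = ((x_va @ w_new - y_va) ** 2).mean()

    outer_opt.zero_grad()
    outer_loss.backward()                                # hypergradient for log_lam
    outer_opt.step()
    with torch.no_grad():
        w.copy_(w_new)                                   # commit the inner update
```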
In the current era of near-term, intermediate-scale quantum computing, two types of near-term quantum devices are available on the cloud: superconducting quantum processing units (QPUs) based on the discrete-variable model, and linear-optics (photonics) QPUs based on the continuous-variable (CV) model. Quantum computation in the discrete-variable model is performed in a finite-dimensional quantum state space, whereas the CV model operates in an infinite-dimensional space. When implementing quantum algorithms, the CV model offers additional quantum gates that are not available in the discrete-variable model. CV-based photonic quantum computers also use a different measurement approach and the notion of a cutoff dimension, which gives extra flexibility in controlling the length of a quantum circuit's output vector.
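As an illustration of the cutoff-dimension idea, the sketch below uses Strawberry Fields, one CV/photonic SDK (the abstract does not name a specific toolkit): the simulated infinite-dimensional Fock space is truncated at `cutoff_dim`, which bounds the length of the state and output vectors.

```python
import strawberryfields as sf
from strawberryfields.ops import Dgate, Sgate, MeasureFock

# One-mode CV program: squeezing, then displacement, then a photon-number measurement.
prog = sf.Program(1)
with prog.context as q:
    Sgate(0.5) | q[0]
    Dgate(1.0) | q[0]
    MeasureFock() | q[0]

# cutoff_dim truncates the Fock basis used by the simulator.
eng = sf.Engine("fock", backend_options={"cutoff_dim": 8})
result = eng.run(prog)
print(result.samples)       # measured photon numbers, bounded by the cutoff
```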
Inductive transfer learning aims to learn a target task from a small amount of training data by leveraging a model pre-trained on a source task. Most strategies involving large-scale deep learning models adopt the pre-trained model as initialization and fine-tune it on the target task. However, when using over-parameterized models, we can often prune the model without sacrificing accuracy on the source task. This motivates us to adopt model pruning for transfer learning with deep learning models. In this paper, we propose PAC-Net, a simple yet effective approach for pruning-based transfer learning. PAC-Net consists of three steps: Prune, Allocate, and Calibrate (PAC). The main idea behind these steps is to identify the essential weights for the source task, fine-tune on the source task by updating those essential weights, and then calibrate on the target task by updating the remaining redundant weights. Across a broad set of inductive transfer learning experiments, we show that our method achieves state-of-the-art performance by a large margin.
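A minimal sketch of the Prune/Allocate/Calibrate idea using magnitude pruning and gradient masking; the function names and the pruning criterion are illustrative and not taken from the authors' code:

```python
import torch

def magnitude_mask(weight, keep_ratio=0.5):
    """Binary mask marking the largest-magnitude ('essential') weights."""
    k = int(weight.numel() * keep_ratio)
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

def masked_sgd_step(weight, lr, mask, update_essential):
    """SGD step restricted to one partition of the weights."""
    with torch.no_grad():
        allowed = mask if update_essential else 1.0 - mask
        weight -= lr * weight.grad * allowed
        weight.grad.zero_()

# 1) Prune:     mask = magnitude_mask(W) identifies the essential weights.
# 2) Allocate:  fine-tune the source task with masked_sgd_step(..., update_essential=True),
#               so only essential weights change.
# 3) Calibrate: train the target task with update_essential=False, so the source
#               knowledge stored in the essential weights is preserved.
```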